Unified Expectation Maximization
Authors
Abstract
We present a general framework containing a graded spectrum of Expectation Maximization (EM) algorithms called Unified Expectation Maximization (UEM). UEM is parameterized by a single parameter and covers existing algorithms like standard EM and hard EM, constrained versions of EM such as Constraint-Driven Learning (Chang et al., 2007) and Posterior Regularization (Ganchev et al., 2010), along with a range of new EM algorithms. For the constrained inference step in UEM we present an efficient dual projected gradient ascent algorithm which generalizes several dual decomposition and Lagrange relaxation algorithms popularized recently in the NLP literature (Ganchev et al., 2008; Koo et al., 2010; Rush and Collins, 2011). UEM is as efficient and easy to implement as standard EM. Furthermore, experiments on POS tagging, information extraction, and word alignment show that the best-performing algorithm in the UEM family is often a new one that was not available earlier, exhibiting the benefits of the UEM framework.
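To illustrate the idea of a single parameter interpolating between standard EM and hard EM, here is a minimal sketch of an unconstrained E-step. It assumes (this detail is not spelled out in the abstract) that without constraints the graded E-step reduces to a tempered posterior q(y) ∝ p(y|x)^(1/γ), where γ = 1 recovers standard EM and γ → 0 approaches hard EM; the function name `tempered_e_step` is our own illustrative choice.

```python
import numpy as np

def tempered_e_step(log_posterior, gamma):
    """Sketch of a graded E-step: q(y) ∝ p(y|x)^(1/gamma).

    log_posterior: array of shape (n_examples, n_labels) holding
    log p(y|x) for each example. gamma = 1 gives the standard EM
    posterior; gamma = 0 is treated as the hard-EM (argmax) limit.
    """
    if gamma == 0.0:
        # Hard EM limit: all probability mass on the most likely label.
        q = np.zeros_like(log_posterior)
        q[np.arange(len(q)), np.argmax(log_posterior, axis=1)] = 1.0
        return q
    scaled = log_posterior / gamma
    scaled -= scaled.max(axis=1, keepdims=True)  # for numerical stability
    q = np.exp(scaled)
    return q / q.sum(axis=1, keepdims=True)     # renormalize each row
```

Intermediate values of γ yield posteriors sharper or flatter than standard EM, which is the "graded spectrum" the abstract refers to; the constrained case additionally requires the dual projected gradient ascent step described above.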
Similar Papers
Unsupervised Classification of Changes in Multispectral Satellite Imagery
The statistical techniques of multivariate alteration detection, maximum autocorrelation factor transformation, expectation maximization, fuzzy maximum likelihood estimation and probabilistic label relaxation are combined in a unified scheme to classify changes in multispectral satellite data. An example involving bitemporal LANDSAT TM imagery is given.
A Unified Framework for Utility Maximization Problems: an Orlicz Space Approach
We consider a stochastic financial incomplete market where the price processes are described by a vector-valued semimartingale that is possibly nonlocally bounded. We face the classical problem of utility maximization from terminal wealth, with utility functions that are finite-valued over (a,∞), a ∈ [−∞,∞), and satisfy weak regularity assumptions. We adopt a class of trading strategies that al...
Parameter Estimation in Finite Mixture Models by Regularized Optimal Transport: A Unified Framework for Hard and Soft Clustering
In this short paper, we formulate parameter estimation for finite mixture models in the context of discrete optimal transportation with convex regularization. The proposed framework unifies hard and soft clustering methods for general mixture models. It also generalizes the celebrated k-means and expectation-maximization algorithms in relation to associated Bregman divergences when applied to e...
Inferring High-Level Behavior from Low-Level Sensors
We present a method of learning a Bayesian model of a traveler moving through an urban environment. This technique is novel in that it simultaneously learns a unified model of the traveler’s current mode of transportation as well as his most likely route, in an unsupervised manner. The model is implemented using particle filters and learned using Expectation-Maximization. The training data is d...
A unified approach on fast training of feedforward and recurrent networks using EM algorithm
In this work, we provide a theoretical framework that unifies the notions of hidden representations and moving targets through the expectation-maximization (EM) algorithm. Based on such a framework, two fast training algorithms can be derived consistently for both feedforward networks and recurrent networks.